Adversarial training and dilated convolutions for brain MRI segmentation
Convolutional neural networks (CNNs) have been applied to various automatic
image segmentation tasks in medical image analysis, including brain MRI
segmentation. Generative adversarial networks have recently gained popularity
because of their power in generating images that are difficult to distinguish
from real images.
In this study we use an adversarial training approach to improve CNN-based
brain MRI segmentation. To this end, we include an additional loss function
that motivates the network to generate segmentations that are difficult to
distinguish from manual segmentations. During training, this loss function is
optimised together with the conventional average per-voxel cross entropy loss.
The results show improved segmentation performance using this adversarial
training procedure for segmentation of two different sets of images and using
two different network architectures, both visually and in terms of Dice
coefficients. Comment: MICCAI 2017 Workshop on Deep Learning in Medical Image Analysis
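The training objective described above can be sketched numerically: the conventional average per-voxel cross-entropy loss is combined with a generator-side adversarial term that penalises segmentations the discriminator can distinguish from manual ones. This is a minimal numpy sketch on toy values, not the authors' implementation; the weighting `lam` and the function names are assumptions.

```python
import numpy as np

def voxel_cross_entropy(p, y, eps=1e-7):
    """Average per-voxel binary cross-entropy between predicted
    probabilities p and manual labels y."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def adversarial_loss(d_score, eps=1e-7):
    """Generator-side adversarial term: small when the discriminator
    scores the generated segmentation as realistic (d_score near 1)."""
    return -np.log(np.clip(d_score, eps, 1.0))

# Toy example: a 4-voxel prediction; the discriminator is undecided (0.5).
p = np.array([0.9, 0.8, 0.2, 0.1])
y = np.array([1.0, 1.0, 0.0, 0.0])
lam = 0.1  # relative weight of the adversarial term (assumed value)
total = voxel_cross_entropy(p, y) + lam * adversarial_loss(0.5)
```

During training, both terms would be minimised jointly with respect to the segmentation network's parameters, while the discriminator is trained in alternation.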
Semi-Supervised Deep Learning for Fully Convolutional Networks
Deep learning usually requires large amounts of labeled training data, but
annotating data is costly and tedious. The framework of semi-supervised
learning provides the means to use both labeled data and arbitrary amounts of
unlabeled data for training. Recently, semi-supervised deep learning has been
intensively studied for standard CNN architectures. However, Fully
Convolutional Networks (FCNs) set the state-of-the-art for many image
segmentation tasks. To the best of our knowledge, there is no existing
semi-supervised learning method for such FCNs yet. We lift the concept of
auxiliary manifold embedding for semi-supervised learning to FCNs with the help
of Random Feature Embedding. In our experiments on the challenging task of MS
Lesion Segmentation, we leverage the proposed framework for the purpose of
domain adaptation and report substantial improvements over the baseline model. Comment: 9 pages, 6 figures
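The auxiliary manifold-embedding idea can be illustrated with a small numpy sketch: feature vectors of same-label samples are pulled together while different-label samples are pushed at least a margin apart, computed on a random subsample of voxels (full FCN feature maps are far too large to embed exhaustively). This is a generic contrastive-embedding sketch under assumed names, not the paper's Random Feature Embedding implementation.

```python
import numpy as np

def embedding_loss(feats, labels, margin=1.0):
    """Contrastive manifold-embedding loss over sampled feature vectors:
    same-label pairs are attracted, different-label pairs repelled up to
    `margin` (hypothetical sketch of the auxiliary objective)."""
    loss, n = 0.0, 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d = np.linalg.norm(feats[i] - feats[j])
            if labels[i] == labels[j]:
                loss += d ** 2                       # attract
            else:
                loss += max(0.0, margin - d) ** 2    # repel
            n += 1
    return loss / n

# Random subsample of feature vectors stands in for sampled FCN features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
labels = rng.integers(0, 2, size=8)
idx = rng.choice(8, size=4, replace=False)  # random subset per batch
l = embedding_loss(feats[idx], labels[idx])
```

In a semi-supervised setting this term can be evaluated on unlabeled data as well, e.g. with pseudo-labels or domain labels, alongside the supervised segmentation loss.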
Towards continual learning in medical imaging
This work investigates continual learning of two segmentation tasks in brain MRI with neural networks. To explore the capabilities of current methods for countering catastrophic forgetting of the first task when a new one is learned, we investigate elastic weight consolidation, a recently proposed method based on Fisher information, originally evaluated on reinforcement learning of Atari games. We use it to sequentially learn segmentation of normal brain structures and then segmentation of white matter lesions. Our findings show that this method reduces catastrophic forgetting, although large room for improvement remains in these challenging settings for continual learning.
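Elastic weight consolidation adds a quadratic penalty that anchors each parameter to its value after the first task, weighted by that parameter's (diagonal) Fisher information. A minimal numpy sketch of the penalty, with illustrative values (the arrays here are hypothetical, not from the paper):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=100.0):
    """EWC penalty: (lam/2) * sum_i F_i * (theta_i - theta*_i)^2.
    Parameters important for task 1 (large F_i) are held close to
    their task-1 values theta*_i while task 2 is learned."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([0.5, -1.0, 2.0])  # weights after task 1
fisher = np.array([10.0, 0.1, 5.0])      # per-weight importance estimates
theta = np.array([0.6, 0.0, 2.0])        # weights during task-2 training

# Total task-2 objective would be: task2_loss(theta) + ewc_penalty(...)
pen = ewc_penalty(theta, theta_star, fisher)
```

Note how the low-Fisher weight (middle entry) can drift a full unit for the same cost as a 0.1 drift of the high-Fisher one, which is exactly the mechanism that trades plasticity for retention.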
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role for diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and to obtain uncertainty estimates of the segmentation results. Comment: 12 pages, 3 figures, MICCAI BrainLes 201
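The test-time augmentation procedure can be sketched as: apply each transform to the input, run the network, invert the transform on the prediction, and average; the spread across augmented predictions serves as an uncertainty proxy. The sketch below uses only axis flips and an identity stand-in for the CNN so it stays self-contained (rotation, scaling, and noise are omitted; `predict` is hypothetical, not the paper's networks).

```python
import numpy as np

def predict(volume):
    """Stand-in for a trained CNN; here simply the identity so the
    sketch is self-contained."""
    return volume

def tta_predict(volume):
    """Average predictions over flips along each axis of a 3D volume,
    undoing each flip before averaging."""
    preds = [predict(volume)]
    for axis in range(3):
        flipped = np.flip(volume, axis=axis)
        preds.append(np.flip(predict(flipped), axis=axis))  # invert flip
    mean = np.mean(preds, axis=0)          # aggregated segmentation
    uncertainty = np.var(preds, axis=0)    # disagreement across augmentations
    return mean, uncertainty

vol = np.arange(27, dtype=float).reshape(3, 3, 3)
mean, unc = tta_predict(vol)
```

With a real, non-equivariant network the variance map is non-zero and highlights voxels where predictions are unstable under augmentation.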
A deep level set method for image segmentation
This paper proposes a novel image segmentation approach that integrates fully
convolutional networks (FCNs) with a level set model. Compared with an FCN, the
integrated method can incorporate smoothing and prior information to achieve an
accurate segmentation. Furthermore, rather than using the level set model as
a post-processing tool, we integrate it into the training phase to fine-tune the
FCN. This allows the use of unlabeled data during training in a
semi-supervised setting. Using two types of medical imaging data (liver CT and
left ventricle MRI data), we show that the integrated method achieves
good performance even when little training data is available, outperforming the
FCN or the level set model alone.
Expected exponential loss for gaze-based video and volume ground truth annotation
Many recent machine learning approaches used in medical imaging are highly
reliant on large amounts of image and ground truth data. In the context of
object segmentation, pixel-wise annotations are extremely expensive to collect,
especially in video and 3D volumes. To reduce this annotation burden, we
propose a novel framework to allow annotators to simply observe the object to
segment and record where they have looked with a $200 eye gaze tracker. Our
method then estimates pixel-wise probabilities for the presence of the object
throughout the sequence, from which we train a classifier in a semi-supervised
setting using a novel Expected Exponential loss function. We show that our
framework provides superior performances on a wide range of medical image
settings compared to existing strategies and that our method can be combined
with current crowd-sourcing paradigms as well. Comment: 9 pages, 5 figures, MICCAI 2017 - LABELS Workshop
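The core idea of an expected exponential loss can be sketched as follows: since gaze only yields a probability that each pixel is foreground, the exponential loss is averaged over that label distribution instead of being evaluated on a hard label. A minimal numpy sketch with assumed variable names (the exact form in the paper may differ):

```python
import numpy as np

def expected_exp_loss(scores, p_pos):
    """Exponential loss exp(-y * f(x)) taken in expectation over the
    gaze-derived label distribution, P(y = +1 | x) = p_pos."""
    return np.mean(p_pos * np.exp(-scores) + (1 - p_pos) * np.exp(scores))

scores = np.array([2.0, -1.5, 0.0])  # classifier outputs f(x)
p_pos = np.array([0.9, 0.2, 0.5])    # pixel-wise foreground probabilities
l = expected_exp_loss(scores, p_pos)
```

When a label probability is 0 or 1 the expression collapses to the ordinary exponential loss, so confidently annotated pixels behave as in fully supervised training while uncertain pixels contribute a softened penalty.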
Domain and Geometry Agnostic CNNs for Left Atrium Segmentation in 3D Ultrasound
Segmentation of the left atrium and deriving its size can help to predict and
detect various cardiovascular conditions. Automation of this process in 3D
Ultrasound image data is desirable, since manual delineations are
time-consuming, challenging and observer-dependent. Convolutional neural
networks have made improvements in computer vision and in medical image
analysis. They have successfully been applied to segmentation tasks and were
extended to work on volumetric data. In this paper we introduce a combined
deep-learning based approach on volumetric segmentation in Ultrasound
acquisitions with incorporation of prior knowledge about left atrial shape and
imaging device. The results show that including a shape prior helps domain
adaptation, and that segmentation accuracy is further increased with
adversarial learning.
Context label learning: improving background class representations in semantic segmentation
Background samples provide key contextual information for segmenting regions of interest (ROIs). However, they typically cover a diverse set of structures, making it difficult for the segmentation model to learn decision boundaries with both high sensitivity and high precision. The issue stems from the highly heterogeneous nature of the background class, which results in multi-modal feature distributions. Empirically, we find that neural networks trained with such a heterogeneous background struggle to map the corresponding contextual samples to compact clusters in feature space. As a result, the distribution over background logit activations may shift across the decision boundary, leading to systematic over-segmentation across different datasets and tasks. In this study, we propose context label learning (CoLab) to improve the context representations by decomposing the background class into several subclasses. Specifically, we train an auxiliary network as a task generator, along with the primary segmentation model, to automatically generate context labels that positively affect the ROI segmentation accuracy. Extensive experiments are conducted on several challenging segmentation tasks and datasets. The results demonstrate that CoLab can guide the segmentation model to map the logits of background samples away from the decision boundary, resulting in significantly improved segmentation accuracy. Code is available.
Domain generalization via model-agnostic learning of semantic features
Generalization capability to unseen domains is crucial for machine learning models when deploying to real-world conditions. We investigate the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics. We adopt a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift. Further, we introduce two complementary losses which explicitly regularize the semantic structure of the feature space. Globally, we align a derived soft confusion matrix to preserve general knowledge about inter-class relationships. Locally, we promote domain-independent class-specific cohesion and separation of sample features with a metric-learning component. The effectiveness of our method is demonstrated with new state-of-the-art results on two common object recognition benchmarks. Our method also shows consistent improvement on a medical image segmentation task.
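The gradient-based meta-train/meta-test procedure can be illustrated on a toy problem: take an inner gradient step on one source domain (meta-train), then require the updated parameters to also fit a held-out source domain (meta-test), and update with a first-order combination of both gradients. The sketch below uses a two-parameter linear model with hypothetical data; it illustrates the optimization pattern only, not the paper's networks or its semantic-feature losses.

```python
import numpy as np

def mse(w, X, y):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def grad(w, X, y):
    r = X @ w - y
    return X.T @ r / len(y)

# Two source "domains" consistent with the same underlying mapping w* = [1, 2].
Xa, ya = np.array([[1., 0.], [0., 1.]]), np.array([1., 2.])
Xb, yb = np.array([[1., 1.], [1., -1.]]), np.array([3., -1.])

w = np.zeros(2)
alpha, beta = 0.2, 0.2  # inner (meta-train) and outer step sizes (assumed)
for _ in range(100):
    w_in = w - alpha * grad(w, Xa, ya)           # meta-train step on domain A
    # First-order meta-update: meta-train + meta-test gradients combined.
    w = w - beta * (grad(w, Xa, ya) + grad(w_in, Xb, yb))

final_test_loss = mse(w, Xb, yb)
```

Evaluating the inner-updated parameters on a different domain is what "exposes the optimization to domain shift": parameters are rewarded for updates that transfer, not merely for fitting the domain they were computed on.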
Tversky loss function for image segmentation using 3D fully convolutional deep networks
Fully convolutional deep neural networks carry excellent potential for
fast and accurate image segmentation. One of the main challenges in training
these networks is data imbalance, which is particularly problematic in medical
imaging applications such as lesion segmentation where the number of lesion
voxels is often much lower than the number of non-lesion voxels. Training with
unbalanced data can lead to predictions that are severely biased towards high
precision but low recall (sensitivity), which is undesired especially in
medical applications where false negatives are much less tolerable than false
positives. Several methods have been proposed to deal with this problem
including balanced sampling, two step training, sample re-weighting, and
similarity loss functions. In this paper, we propose a generalized loss
function based on the Tversky index to address the issue of data imbalance and
achieve a much better trade-off between precision and recall in training 3D fully
convolutional deep neural networks. Experimental results in multiple sclerosis
lesion segmentation on magnetic resonance images show improved F2 score, Dice
coefficient, and the area under the precision-recall curve in test data. Based
on these results, we suggest the Tversky loss function as a generalized framework to
effectively train deep neural networks.
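The Tversky loss generalizes the Dice loss by weighting false positives and false negatives asymmetrically: loss = 1 - TP / (TP + alpha*FP + beta*FN), with alpha = beta = 0.5 recovering Dice. A minimal numpy sketch on soft (probabilistic) predictions; the toy arrays and the default alpha/beta are illustrative values, not results from the paper.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss: 1 - TP / (TP + alpha*FP + beta*FN).
    alpha = beta = 0.5 recovers the Dice loss; increasing beta penalises
    false negatives more, trading precision for recall, which suits
    lesion segmentation where misses are costlier than false alarms."""
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)

pred = np.array([0.9, 0.8, 0.1, 0.0])    # predicted lesion probabilities
target = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth mask
l = tversky_loss(pred, target)
```

Because the loss is differentiable in `pred`, it can be minimised directly by gradient descent in a 3D fully convolutional network, with alpha and beta chosen to match the desired precision/recall balance for the imbalanced lesion class.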